-
BACKGROUND: Natureculture constructs (Haraway, 2003; Fuentes, 2010) offer a powerful framework for science education to explore learners’ interactions with and understanding of the natural world. Technologies such as augmented reality (AR) designed to reveal pets’ sensory worlds, together with companionship with pets, can facilitate learners’ harmonious relationships with significant others in naturecultures.

METHODS: At a two-week virtual summer camp, we engaged teens in inquiring into dogs’ and cats’ senses using selective color filters, investigations, and experience design projects, and in understanding how the umwelt (von Uexküll, 2001) of pets shapes their lives with humans. We qualitatively analyzed participants’ talk, extensive field notes, and projects completed at the workshop.

FINDINGS: Teens engaged in the science and engineering practices of planning and carrying out investigations, constructing explanations and designing solutions, and asking questions while investigating specific aspects of their pets’ lives. Further, teens’ checking and taking of their pets’ perspectives while caring for them shaped their productive engagement in these practices. The relationship between pets and humans facilitated an ecological and relational approach to science learning.

CONTRIBUTION: Our findings suggest that the relational practices of caring and perspective-taking coexist with scientific practices and enrich scientific inquiry.
-
Evaluating the quality of accessible image captions with human raters is challenging: a visually impaired user may find it hard to judge how comprehensive a caption is, whereas a sighted assistant may not know what information the user will need from it. To explore how image captioners and caption consumers assess caption content, we conducted a series of collaborative captioning sessions in which six pairs, each consisting of a blind person and their sighted partner, worked together to discuss, create, and evaluate image captions. By making captioning a collaborative task, we were able to observe captioning strategies, elicit questions and answers about image captions, and explore blind users’ caption preferences. Our findings provide insight into the process of creating good captions and serve as a case study for cross-ability collaboration between blind and sighted people.
-
Drones have become fixtures in commerce, in safety efforts, and in homes as leisure devices. Researchers have started to explore how drones can support people with disabilities as pilots and serve as assistive devices. Our work focuses on people with vision impairments and investigates what motivates them to fly drones. We administered a survey to visually impaired adults that gauged general interest in drone piloting and previous experience with drones. From the 59 survey responses, we interviewed 13 participants to elaborate on how they envision using drones and how different forms of feedback and modes of piloting could make the flying experience more accessible. We found that our participants had overarching interests in aviation, trying new technology, exploring their environment, and finding collaborative activities to share with sighted family members, all of which extended to an interest in piloting drones. This research lays groundwork for design scenarios and accessible features for future drones.
-
Many images on the Web, including photographs and artistic images, feature spatial relationships between objects that are inaccessible to someone who is blind or visually impaired, even when a text description is provided. While some tools exist for manually creating accessible image descriptions, this work is time-consuming and requires specialized tools. We introduce an approach that automatically creates spatially registered image labels based on how a sighted person naturally interacts with an image. Our system, EyeDescribe, collects behavioral data from sighted viewers of an image, specifically eye gaze data and spoken descriptions, and uses them to generate a spatially indexed accessible image that can then be explored with an audio-based touch screen application. We describe our approach to assigning text labels to locations in an image based on eye gaze, then report on two formative studies with blind users testing EyeDescribe. Our approach produced correct labels for all objects in our image set, and participants were better able to recall the locations of objects when given both object labels and spatial locations. This approach provides a new method for creating accessible images with minimal required effort.
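The core idea of pairing spoken descriptions with gaze locations can be sketched as follows. This is a minimal illustrative approximation, not the paper’s actual algorithm: the function name, the data format, and the one-second matching window are all assumptions.

```python
# Illustrative sketch: assign each spoken label the average gaze
# position recorded near the time the word was uttered.
def label_positions(gaze, labels, window=1.0):
    """gaze: list of (t, x, y) gaze samples; labels: list of (t, word).
    Returns {word: (mean_x, mean_y)} from gaze samples that fall
    within `window` seconds of each label's utterance time."""
    out = {}
    for lt, word in labels:
        pts = [(x, y) for t, x, y in gaze if abs(t - lt) <= window]
        if pts:  # skip labels with no nearby gaze samples
            out[word] = (sum(p[0] for p in pts) / len(pts),
                         sum(p[1] for p in pts) / len(pts))
    return out
```

A real system would also need fixation detection and smoothing of noisy gaze data; this sketch only shows the timestamp-based association step.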
-
Tactile graphics are a common way to present information to people with vision impairments. They can be used to explore a broad range of static visual content but are not well suited to representing animation or interactivity. We introduce a new approach to creating dynamic tactile graphics that combines a touch screen tablet, static tactile overlays, and small mobile robots, realized in a prototype system called RoboGraphics and several proof-of-concept applications. We evaluated our prototype with seven participants with varying levels of vision, comparing the RoboGraphics approach to a flat-screen, audio-tactile interface. Our results show that dynamic tactile graphics can help visually impaired participants explore data quickly and accurately.
-
Feminist science approaches recognize the value of integrating empathy, closeness, subjectivity, and caring into scientific sensemaking. These approaches reject the notion that scientists must be objective and dispassionate, and they expand the possibilities of what counts as valuable scientific knowledge. One avenue for engaging people in empathetically driven scientific inquiry is through learning activities about how our pets experience the world. In this study, we developed an augmented reality device we called DoggyVision that lets people see the world much as dogs do. We designed a scavenger hunt in which families explored indoor and outdoor environments with DoggyVision, collected data firsthand, and drew conclusions about the differences between how humans and dogs see the world. In this paper, we illustrate how our DoggyVision workshop re-mediated scientific inquiry and supported the integration of feminist practices into scientific sensemaking.
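Dogs are dichromats: they lack a long-wavelength (red-sensitive) cone, so reds and greens collapse toward a single yellowish channel while blues remain distinct. A crude per-pixel sketch of such a filter is shown below; the equal-weight channel merge is an illustrative assumption, not the actual DoggyVision transform.

```python
# Illustrative dichromat approximation for a single RGB pixel
# (channels as 0-255 ints). Not the actual DoggyVision filter.
def dog_filter(r, g, b):
    """Merge the red and green channels to mimic the loss of
    red/green discrimination in canine (dichromatic) vision;
    the blue channel is left unchanged."""
    ry = (r + g) // 2  # collapse long/medium wavelengths
    return (ry, ry, b)
```

Applied to every pixel of a live camera frame, a filter like this turns red and green objects the same muddy yellow, which is roughly the effect the scavenger hunt relied on.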
-
Computer science education is widely viewed as a path to empowerment for young people, potentially leading to higher education, careers, and the development of computational thinking skills. However, few resources exist for people with cognitive disabilities to learn computer science. In this paper, we document our observations of a successful program in which young adults with cognitive disabilities are trained in computing concepts. Through field observations and interviews, we identify the instructional strategies used by this group, the accessibility challenges it encountered, and how instructors and students leverage peer learning to support technical education. Our findings lead to guidelines for developing tools and curricula to support young adults with cognitive disabilities in learning computer science.